This paper reviews the AIM 2022 challenge on super-resolution of compressed image and video. The challenge includes two tracks. Track 1 targets super-resolution of compressed images, and Track 2 targets super-resolution of compressed videos. In Track 1, we use the popular DIV2K dataset as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the 335 videos of the LDV 2.0 dataset and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results for Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Unlike large platforms such as Taobao and Amazon, developing CVR models in small-scale recommendation scenarios is more challenging due to the severe data distribution fluctuation (DDF) problem. DDF prevents existing CVR models from working well, because 1) several months of data are needed to train a CVR model sufficiently in small scenarios, leading to a considerable distribution discrepancy between training and online serving; and 2) e-commerce promotions have a much greater impact on small scenarios, bringing uncertainty to the distribution of the upcoming time period. In this work, we propose a novel CVR method named MetaCVR that addresses the DDF problem from a meta-learning perspective. First, a base CVR model consisting of a Feature Representation Network (FRN) and output layers is elaborately designed and trained with samples accumulated over months. Then we treat time periods with different data distributions as different occasions and obtain positive and negative prototypes for each occasion using the corresponding samples and the pre-trained FRN. Subsequently, a Distance Metric Network (DMN) is designed to compute the distance metrics between each sample and all prototypes, in order to mitigate the distribution uncertainty. Finally, we develop an Ensemble Prediction Network (EPN) that incorporates the outputs of the FRN and DMN to make the final CVR prediction. In this stage, we freeze the FRN and train the DMN and EPN with samples from the most recent time period, effectively easing the distribution discrepancy. To the best of our knowledge, this is the first study of CVR prediction targeting the DDF problem in small-scale recommendation scenarios. Experimental results on real-world datasets validate the superiority of our MetaCVR, and an online A/B test also shows that our model achieves an impressive gain of 11.92% in PCVR and 8.64% in GMV.
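The prototype-matching step described above can be sketched minimally as follows. This is a hypothetical illustration only: the function names are my own, the prototypes are hand-set toy vectors, and a fixed Euclidean distance plus softmax stands in for the paper's learned Distance Metric Network.

```python
import math

def prototype_distances(feature, prototypes):
    """Euclidean distance from one sample's feature vector to each
    per-occasion prototype (a stand-in for the learned DMN)."""
    return [math.dist(feature, p) for p in prototypes]

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy example: one sample, one positive and one negative prototype
# for the current "occasion" (all numbers illustrative).
feature = [0.9, 0.1]
protos = {"positive": [1.0, 0.0], "negative": [0.0, 1.0]}
dists = prototype_distances(feature, list(protos.values()))
# Negate distances so closer prototypes get larger weights.
weights = softmax([-d for d in dists])
```

Here the sample sits near the positive prototype, so the first weight dominates; in the actual method these distance signals are fed, together with the FRN output, into the ensemble predictor.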
Promotions are becoming more important and prevalent on e-commerce platforms for attracting customers and boosting sales. However, CTR prediction methods in recommender systems cannot handle such circumstances well, because: 1) they cannot generalize to online serving, since the online data distribution is uncertain due to potentially upcoming promotions; and 2) without paying enough attention to scenario signals, they cannot learn the different feature representation patterns that coexist within each scenario. In this work, we propose Scenario-Adaptive Mixture of Experts (SAME), a simple yet effective model for both promotion and normal scenarios. Technically, it follows the idea of mixture of experts by adopting multiple experts to learn feature representations, which are modulated by a Feature Gating Network (FGN) via an attention mechanism. To obtain high-quality representations, we design a Stacked Parallel Attention Unit (SPAU) to help each expert better handle user behavior sequences. To address the distribution uncertainty, a set of scenario signals is elaborately devised from the perspective of time-series prediction and fed into the FGN, whose output is concatenated with the feature representation from each expert to learn the attention. Accordingly, the mixture of feature representations is scenario-adaptive and is used for the final CTR prediction. In this way, each expert can learn discriminative representation patterns. To the best of our knowledge, this is the first study of promotion-aware CTR prediction. Experimental results on real-world datasets validate the superiority of SAME. An online A/B test also shows that SAME achieves significant gains in CTR and of 5.94% in IPV during promotion periods, and of 3.93% and 6.57% respectively on normal days.
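The scenario-gated expert mixture above can be sketched as follows. This toy version replaces the learned FGN and attention with a fixed softmax gate over hand-set scores; the expert functions, gate scores, and input vector are all illustrative assumptions, not the paper's architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mixture_of_experts(x, experts, gate_scores):
    """Blend expert outputs with gate probabilities.

    experts: callables mapping the input vector to a representation.
    gate_scores: per-expert scores, standing in for what the FGN would
    derive from scenario signals.
    """
    probs = softmax(gate_scores)
    outs = [f(x) for f in experts]
    dim = len(outs[0])
    return [sum(p * o[i] for p, o in zip(probs, outs)) for i in range(dim)]

# Two toy "experts": identity and a doubling transform.
experts = [lambda v: list(v), lambda v: [2.0 * u for u in v]]
x = [1.0, 3.0]
# Gate scores favour the first expert (e.g. a normal, non-promotion day).
blended = mixture_of_experts(x, experts, gate_scores=[2.0, 0.0])
```

The blended representation lies between the two expert outputs, weighted toward the favoured expert; in SAME this weighting shifts per scenario, which is what makes the mixture scenario-adaptive.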
Data-centric AI has recently been demonstrated to be more effective and higher-performing, while traditional model-centric AI delivers fewer and fewer benefits. It emphasizes improving the quality of datasets to achieve better model performance. This field has significant potential due to its great practicality and growing attention. However, we have not seen significant research progress in this field, especially in NLP. We propose DataCLUE, the first data-centric benchmark applied in the NLP field. We also provide three simple but effective baselines to foster research in this field (improving Macro-F1 by up to 5.7 points). In addition, we conduct comprehensive experiments with human annotators and show the hardness of DataCLUE. We also try an advanced method: a forgetting-informed bootstrapping label correction method. All resources related to DataCLUE, including the dataset, toolkit, leaderboard, and baselines, are available online at https://github.com/cluebenchmark/dataclue
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the experimental results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
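The multi-scale idea can be sketched as pooling the same input at several window sizes and concatenating the results. This is a toy analogue under strong assumptions: plain average pooling on a 1-D list, whereas the actual MS module operates on learned CNN feature maps.

```python
def pool(signal, scale):
    """Non-overlapping average pooling with the given window size."""
    return [sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)]

def multi_scale_features(signal, scales=(1, 2, 4)):
    """Concatenate pooled views of the input at several scales,
    so fine and coarse structure coexist in one feature vector."""
    feats = []
    for s in scales:
        feats.extend(pool(signal, s))
    return feats

# Illustrative input; a real IQA model would pool 2-D feature maps.
signal = [1.0, 3.0, 2.0, 4.0, 6.0, 8.0, 5.0, 7.0]
feats = multi_scale_features(signal)
```

The resulting vector keeps the raw values (scale 1) alongside progressively smoothed summaries (scales 2 and 4), which is the intuition behind letting a regressor see distortion patterns at multiple scales.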
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as its start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. This mechanism can also supplement the absent consecutive visual semantics of the sparsely sampled frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
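One common way to realize soft boundary labels, sketched here as a hypothetical illustration: place a Gaussian over the frame index nearest the annotated boundary, so frames adjacent to a possibly-lost true boundary still receive partial credit. The function name and the Gaussian choice are my assumptions, not necessarily the paper's exact formulation.

```python
import math

def soft_boundary_labels(num_frames, boundary_idx, sigma=1.0):
    """Gaussian soft labels centred on an (approximate) boundary frame,
    normalized to sum to 1 over the sampled frames."""
    raw = [math.exp(-((i - boundary_idx) ** 2) / (2 * sigma ** 2))
           for i in range(num_frames)]
    total = sum(raw)
    return [r / total for r in raw]

# 7 sampled frames; the annotated start boundary falls nearest frame 3.
labels = soft_boundary_labels(num_frames=7, boundary_idx=3)
```

Under this scheme, a downsampling process that shifts the boundary by one frame still yields a high label on the neighbouring frame, which is the motivation for soft rather than hard boundary supervision.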
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
Unlike traditional distributed machine learning, federated learning stores data locally for training and then aggregates the models on the server, which solves the data security problems that may arise in traditional distributed machine learning. However, during the training process, the transmission of model parameters can impose a significant load on the network bandwidth, and it has been pointed out that the vast majority of model parameters are redundant during transmission. Building on this observation, we explore the data distribution of selected model parameters and propose a deep hierarchical quantization compression algorithm, which further compresses the model and reduces the network load of data transmission through hierarchical quantization of the model parameters. We also adopt a dynamic sampling strategy for client selection to accelerate model convergence. Experimental results on different public datasets demonstrate the effectiveness of our algorithm.
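A toy sketch of the quantization idea, assuming a simple two-level magnitude-based split: the few largest-magnitude parameters get a finer codebook and the rest a coarse one. The function names, bit widths, and split rule are illustrative assumptions, not the paper's exact deep hierarchical scheme.

```python
def quantize(values, bits):
    """Uniform quantization of a list of floats to 2**bits levels."""
    lo, hi = min(values), max(values)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((v - lo) / step) for v in values]
    return codes, lo, step

def dequantize(codes, lo, step):
    """Reconstruct approximate floats from integer codes."""
    return [lo + c * step for c in codes]

def hierarchical_quantize(params, coarse_bits=2, fine_bits=6, k=2):
    """Quantize then reconstruct: the k largest-magnitude parameters
    use the fine codebook, all others the coarse one."""
    order = sorted(range(len(params)), key=lambda i: -abs(params[i]))
    out = list(params)
    for bits, idxs in ((fine_bits, order[:k]), (coarse_bits, order[k:])):
        if not idxs:
            continue
        codes, lo, step = quantize([params[i] for i in idxs], bits)
        for i, v in zip(idxs, dequantize(codes, lo, step)):
            out[i] = v
    return out

params = [0.1, -0.9, 0.05, 0.8, -0.02]
recovered = hierarchical_quantize(params)
```

In a federated setting only the integer codes and the per-group `(lo, step)` pair would be transmitted, which is where the bandwidth saving comes from; the reconstruction above shows the accuracy cost of that compression.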
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) with multiple modalities, such as images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to the potential noise in these modalities (e.g., unidentifiable images and relations). In this paper, we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address vague entity details. Extensive experimental results show that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited number of parameters, promising speed, and good interpretability. Our code will be available soon.
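The instance-level fusion idea can be sketched as weighting per-modality embeddings by per-entity scores. Everything here is a hypothetical stand-in: the embeddings and confidence scores are hand-set, and MEAformer predicts its correlation coefficients with a transformer rather than taking fixed scores as input.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_modalities(embeddings, confidences):
    """Instance-level fusion: weight each modality embedding by a
    softmax over per-entity confidence scores, in the order the
    modalities appear in the dict."""
    w = softmax(confidences)
    names = list(embeddings)
    dim = len(embeddings[names[0]])
    return [sum(wi * embeddings[n][i] for wi, n in zip(w, names))
            for i in range(dim)]

# An entity whose image is unidentifiable: the visual modality gets a
# low score, so graph and text structure dominate the fused embedding.
embs = {"graph": [0.9, 0.1], "image": [0.2, 0.8], "text": [0.7, 0.3]}
fused = fuse_modalities(embs, confidences=[2.0, -1.0, 1.5])
```

Because the weights are computed per entity rather than once per KG, a noisy modality can be down-weighted for one entity while remaining influential for another, which is the contrast the paper draws with KG-level fusion.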